Teacher-directed learning in view-independent face recognition with mixture of experts using single-view eigenspaces
Authors
Abstract
We propose a new model for view-independent face recognition based on a multiview approach. We use the so-called "mixture of experts", ME, in which the problem space is divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our model, instead of leaving the ME to partition the face space automatically, the ME is directed to adapt to a particular partitioning corresponding to predetermined views. To force an expert towards a specific view of the face, in the representation layer we provide each expert with its own eigenspace computed from the faces in the corresponding view. Furthermore, we use teacher-directed learning, TDL, in which, according to the pose of the input training sample, only the weights of the corresponding expert are updated. The experimental results support our claim that directing the experts to a predetermined partitioning of the face space improves the performance of the conventional ME for view-independent face recognition. In particular, for 1200 images of unseen intermediate views of faces from 20 subjects, the ME with single-view eigenspaces yields an average recognition rate of 80.51% over 10 trials, which is noticeably increased to 90.29% by applying the TDL method. © 2007 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
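The Python sketch below is only an illustration of the architecture the abstract describes, not the authors' implementation. It assumes flattened grey-level face vectors, a discrete pose label attached to every training sample, and single-layer softmax experts and gate; all names, dimensions, and the learning rate (n_views, n_subjects, eig_dim, lr, train_tdl, ...) are illustrative assumptions.

```python
# Hypothetical sketch of a mixture of experts (ME) with single-view
# eigenspaces and teacher-directed learning (TDL). Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

n_views, n_subjects, face_dim, eig_dim = 5, 20, 64 * 64, 30  # assumed sizes
lr = 0.05                                                     # assumed learning rate


def one_hot(idx, n):
    v = np.zeros(n)
    v[idx] = 1.0
    return v


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def single_view_eigenspace(faces_of_view, dim):
    """Eigenspace (PCA basis) computed only from faces of one predetermined view."""
    mean = faces_of_view.mean(axis=0)
    _, _, vt = np.linalg.svd(faces_of_view - mean, full_matrices=False)
    return mean, vt[:dim]            # mean face and leading eigenfaces of that view


class Expert:
    """One softmax classifier working in its own single-view eigenspace."""

    def __init__(self, mean, basis):
        self.mean, self.basis = mean, basis
        self.W = rng.normal(0.0, 0.01, (n_subjects, basis.shape[0]))

    def project(self, face):
        return self.basis @ (face - self.mean)

    def forward(self, face):
        return softmax(self.W @ self.project(face))

    def update(self, face, subject):
        # Gradient step on the cross-entropy loss for this expert only.
        p = self.forward(face)
        grad = np.outer(p - one_hot(subject, n_subjects), self.project(face))
        self.W -= lr * grad


# Gating network: a softmax layer on the raw face vector that weights the experts.
Wg = rng.normal(0.0, 0.01, (n_views, face_dim))


def recognize(face, experts):
    """Combine the experts' outputs with the gating weights at test time."""
    gates = softmax(Wg @ face)
    return sum(g * e.forward(face) for g, e in zip(gates, experts))


def train_tdl(samples, experts):
    """Teacher-directed learning: the pose label of each training face decides
    which expert is updated and which gating output is pushed towards 1."""
    global Wg
    for face, subject, pose in samples:          # pose in {0, ..., n_views-1}
        experts[pose].update(face, subject)      # only the matching expert learns
        gates = softmax(Wg @ face)
        Wg -= lr * np.outer(gates - one_hot(pose, n_views), face)
```

The sketch tries to capture the two forms of direction mentioned in the abstract: each expert sees the data only through an eigenspace built from faces of its own view, and during training only the expert matching the sample's pose label (together with the corresponding gating target) is updated.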
Similar resources
Teacher-directed learning in view-independent face recognition with mixture of experts using overlapping eigenspaces
A model for view-independent face recognition, based on Mixture of Experts, ME, is presented. In the basic form of ME, the problem space is automatically divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our proposed model, the ME is directed to adapt to a particular partitioning corresponding to predetermined views. To force an exper...
Teacher-Directed Learning with Mixture of Experts for View-Independent Face Recognition
We propose two new models for view-independent face recognition, which lie under the category of multiview approaches. We use the so-called "mixture of experts" (MOE) in which the problem space is divided into several subspaces for the experts, and then the outputs of the experts are combined by a gating network to form the final output. Basically, our focus is on the way that the face space is p...
View-independent face recognition with Mixture of Experts
A model for view-independent face recognition, based on Mixture of Experts, ME, is presented. Instead of allowing the ME to partition the face space automatically, it is directed to adapt to a particular partitioning corresponding to predetermined views. Experimental results show that this model performs well in recognizing faces of intermediate unseen views. There is neurophysiological evidence ...
Effect of Self-Directed Learning on the Components of Reading Comprehension
Research shows a positive relationship between self-directed learning (SDL) and reading comprehension. The present quasi-experimental study attempted to expand the scope of SDL by investigating its effect on the components of reading comprehension. Sixty high school students took the reading comprehension part of PET as the pretest and the posttest. Over 16 weeks, the experimental group, consisting ...
Mixture of Experts for Persian handwritten word recognition
This paper presents the results of Persian handwritten word recognition based on the Mixture of Experts technique. In the basic form of ME, the problem space is automatically divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our proposed model, we used a Mixture of Experts of multilayer perceptrons with a momentum term in the classification ...